
    The complexity dilemma: Three tips for dealing with complexity in organizations

    Today's turbulent markets force managers to take a stand on rising external complexity. Organisations constantly face a crossroads (the complexity dilemma): accept and nurture complexity, or avoid and reduce it. The first option traces back to Ashby's Law of Requisite Variety, the second to Luhmann's Complexity Reduction. Both positions hold because of an inverted U-shaped relation between complexity and firm performance, called the “complexity curve”: for a fixed amount of external complexity, performance increases as internal complexity increases until a tipping point is reached; beyond that point, an overburden of complexity sinks performance. To resolve the Ashby-Luhmann trade-off and move along the complexity curve, we suggest that complex organizing can be facilitated by a simple design based on (i) modularity, (ii) simple rules, and (iii) organisational capabilities.
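
    The inverted U-shaped relation described above can be illustrated with a toy model; the quadratic form, the coefficients, and the function names below are illustrative assumptions, not taken from the abstract.

    # Toy illustration of the "complexity curve": for a fixed level of external
    # complexity, performance first rises and then falls as internal complexity grows.
    # The quadratic shape and all coefficients are illustrative assumptions only.
    import numpy as np

    def performance(internal, external, penalty=0.5):
        """Hypothetical performance score for a given internal/external complexity ratio."""
        ratio = internal / external
        # Inverted U: a linear benefit of matching complexity minus a quadratic overburden cost.
        return ratio - penalty * ratio**2

    external = 4.0                               # fixed external complexity (arbitrary units)
    internal = np.linspace(0.1, 12.0, 200)       # range of internal complexity levels
    scores = performance(internal, external)

    tipping_point = internal[np.argmax(scores)]  # internal complexity where performance peaks
    print(f"Tipping point at internal complexity of about {tipping_point:.2f}")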

    Complexity and organisational capabilities: development and application of a methodology for analysing complexity and organisational capabilities

    In recent years, the growth of complexity in markets and organizations has attracted the attention of scholars and practitioners, and the question of how to manage complex systems is gaining interest. One stream of literature investigates how organizations respond to increasing external complexity. These responses can be governed either by building up internal complexity, following Ashby's Law of Requisite Variety (1956), or by selecting away external complexity, following Luhmann's Complexity Reduction (1984). Recent studies have also focused on organizational capabilities as a way to manage complexity and sustain long-term performance (Garengo & Bernardi, 2007). This research investigates internal complexity, external complexity, and organizational capabilities, and their impact on performance. To study these relationships, a methodology called the Complexity Assessment Methodology (CAM) was developed and tested. The literature review revealed three gaps: there is still a lack of clarity about the definition of complexity and its main dimensions; a systematic literature review of organizational capabilities is missing; and empirical research assessing the impact of complexity and capabilities on firm performance is still scarce. From these gaps the following research questions were derived: What are the dimensions of external and internal complexity? How can organizations manage complexity through organizational capabilities? How can a methodology be developed that assesses and links internal complexity, external complexity, organizational capabilities, and firm performance? From the literature on complexity and organizational capabilities, four dimensions of external and internal complexity were derived - interdependence, diversity, uncertainty, and dynamicity - and four main organizational capabilities for facing complexity were defined, namely redundancy, interconnection, sharing, and reconfiguration. To answer the third research question, a methodology was developed to assess the effects on performance of the relationships among internal complexity, external complexity, and capabilities. A pilot case study at UniCredit Business Integrated Solutions (UBIS) and subsequently three case studies at Coop Italia, Coop Liguria, and Euris were carried out to test the CAM. Data from the multiple case studies show that (i) the ratio between internal complexity and external complexity influences performance through an inverted U-shaped function, called the “complexity curve”: for a fixed amount of external complexity, performance increases as internal complexity increases until a tipping point is reached, after which an overburden of complexity starts to sink performance; and (ii) the ratio between capabilities and internal complexity also influences performance through an inverted U-shaped curve: undersized capabilities reflect a firm's inability to face and manage internal complexity, while oversized capabilities also sink performance by generating inefficiency (costs). From a theoretical point of view, the research produced a methodology for modelling (sizing) external complexity, internal complexity, and organizational capabilities, and for linking them to performance. From a practical point of view, the research provides evidence from different case studies on how complexity and capabilities were managed and developed.
Moreover, the CAM can help managers define and measure internal complexity and then optimize their capabilities in order to maximize performance.
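
    A hypothetical, simplified sketch of how a CAM-style assessment might aggregate the dimensions named above is given below; the scores, scales, and function names are invented for illustration and are not taken from the thesis.

    # Hypothetical CAM-style aggregation: score the four complexity dimensions and the
    # four capabilities on a common scale, then form the two ratios whose inverted-U
    # relation with performance the study reports. All names and values are illustrative.
    from statistics import mean

    COMPLEXITY_DIMENSIONS = ["interdependence", "diversity", "uncertainty", "dynamicity"]
    CAPABILITIES = ["redundancy", "interconnection", "sharing", "reconfiguration"]

    def index(scores: dict, keys: list) -> float:
        """Average the 1-5 scores of the listed items into a single index."""
        return mean(scores[k] for k in keys)

    # Example assessment of one fictional organisation.
    external = index({"interdependence": 4, "diversity": 3, "uncertainty": 5, "dynamicity": 4},
                     COMPLEXITY_DIMENSIONS)
    internal = index({"interdependence": 3, "diversity": 4, "uncertainty": 3, "dynamicity": 3},
                     COMPLEXITY_DIMENSIONS)
    capability = index({"redundancy": 2, "interconnection": 4, "sharing": 3, "reconfiguration": 3},
                       CAPABILITIES)

    print(f"internal/external complexity ratio: {internal / external:.2f}")
    print(f"capability/internal complexity ratio: {capability / internal:.2f}")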

    Cardiac magnetic resonance-guided cardiac ablation: a case series of an early experience

    Radiofrequency (RF) catheter ablation has become a widely used therapeutic approach. However, long-term results in terms of arrhythmia recurrence are still suboptimal. Cardiac magnetic resonance (CMR) could offer a valuable tool to overcome this limitation, with the possibility of targeting the arrhythmic substrate and evaluating the location, depth, and possible gaps of RF lesions. Moreover, real-time CMR-guided procedures offer a radiation-free approach with an evaluation of anatomical structures, substrates, RF lesions, and possible complications during a single procedure. The first steps in the field have been made with cavotricuspid isthmus ablation, showing similar procedural duration and success rate to standard fluoroscopy-guided procedures, while allowing visualization of anatomic structures and RF lesions. These promising results open the path for further studies in the context of more complex arrhythmias, like atrial fibrillation and ventricular tachycardias. Of note, setting up an interventional CMR (iCMR) centre requires safety and technical standards, mostly related to the need for CMR-compatible equipment and medical staff's educational training. For the cardiac imagers, it is fundamental to provide correct CMR sequences for catheter tracking and guide RF delivery. At the same time, the electrophysiologist needs a rapid interpretation of CMR images during the procedures. The aim of this paper is first to review the logistic and technical aspects of setting up an iCMR suite. Then, we will describe the experience in iCMR-guided flutter ablations of two European centres, Policlinico Casilino in Rome, Italy, and Haga Teaching Hospital in The Hague, the Netherlands

    VALS: Virtual Alliances for Learning Society

    VALS aims to establish sustainable methods and processes for building knowledge partnerships between Higher Education and companies, collaborating to resolve authentic business problems through open innovation mediated by the use of Open Source Software. Open Source solutions provide the means whereby educational institutions, students, businesses and foundations can all collaborate to resolve authentic business problems. Not only does Open Source software provide the necessary shared infrastructure and collaborative practice; the foundations that manage the software are also hubs that channel the operational challenges of their users through to the people who can solve them. This has great potential for enabling students and supervisors to collaborate in resolving the problems of businesses, but it is constrained by the lack of support for managing and promoting collaboration across the two sectors. VALS will 1) provide the methods, practice, documentation and infrastructure to unlock this potential through virtual placements in businesses and other public and private bodies; and 2) pilot and promote these as the “Semester of Code”. To achieve its goals, the project is developing guidance for educational institutions, businesses and foundations, detailing the opportunities and benefits to be gained from the Semester of Code and the changes to organisation and practice required. A Virtual Placement System will be developed by adapting Apache Melange and extending it where necessary. During piloting, the necessary adaptations to practice will be carried out, particularly in universities, and commitments will be established between problem owners and applicants for virtual placements.

    Physics case for an LHCb Upgrade II - Opportunities in flavour physics, and beyond, in the HL-LHC era

    The LHCb Upgrade II will fully exploit the flavour-physics opportunities of the HL-LHC, and study additional physics topics that take advantage of the forward acceptance of the LHCb spectrometer. The LHCb Upgrade I will begin operation in 2020. Consolidation will occur, and modest enhancements of the Upgrade I detector will be installed, in Long Shutdown 3 of the LHC (2025); these are discussed here. The main Upgrade II detector will be installed in Long Shutdown 4 of the LHC (2030) and will build on the strengths of the current LHCb experiment and the Upgrade I. It will operate at a luminosity up to 2×10³⁎ cm⁻ÂČ s⁻Âč, ten times that of the Upgrade I detector. New detector components will improve the intrinsic performance of the experiment in certain key areas. An Expression of Interest proposing Upgrade II was submitted in February 2017. The physics case for the Upgrade II is presented here in more depth. CP-violating phases will be measured with precisions unattainable at any other envisaged facility. The experiment will probe b → sl⁺l⁻ and b → dl⁺l⁻ transitions in both muon and electron decays in modes not accessible at Upgrade I. Minimal flavour violation will be tested with a precision measurement of the ratio B(B⁰ → Ό⁺Ό⁻)/B(Bs → Ό⁺Ό⁻). Probing charm CP violation at the 10⁻⁔ level may result in its long-sought discovery. Major advances in hadron spectroscopy will be possible, which will be powerful probes of low-energy QCD. Upgrade II potentially will have the highest sensitivity of all the LHC experiments on the Higgs to charm-quark couplings. Generically, the new physics mass scale probed, for fixed couplings, will almost double compared with the pre-HL-LHC era; this extended reach for flavour physics is similar to that which would be achieved by the HE-LHC proposal for the energy frontier.
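
    Schematically, and quoting the general flavour-physics literature rather than this abstract, the standard-model (and, more generally, minimal-flavour-violation) expectation for this ratio has the form

    \frac{\mathcal{B}(B^{0}\to\mu^{+}\mu^{-})}{\mathcal{B}(B_{s}^{0}\to\mu^{+}\mu^{-})} \simeq \frac{\tau_{B^{0}}}{\tau_{B_{s}^{0}}}\,\frac{m_{B^{0}}}{m_{B_{s}^{0}}}\,\frac{f_{B^{0}}^{2}}{f_{B_{s}^{0}}^{2}}\,\left|\frac{V_{td}}{V_{ts}}\right|^{2},

    up to a phase-space factor very close to unity, so a precise measurement of the ratio constrains |V_{td}/V_{ts}|^{2} and any non-MFV contribution that would shift it; treat this expression as an illustrative sketch rather than a statement from the paper.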

    LHCb upgrade software and computing : technical design report

    This document reports the Research and Development activities that are carried out in the software and computing domains in view of the upgrade of the LHCb experiment. The implementation of a full software trigger implies major changes in the core software framework, in the event data model, and in the reconstruction algorithms. The increase of the data volumes for both real and simulated datasets requires a corresponding scaling of the distributed computing infrastructure. An implementation plan in both domains is presented, together with a risk assessment analysis

    Multidifferential study of identified charged hadron distributions in Z-tagged jets in proton-proton collisions at \sqrt{s}=13 TeV

    Jet fragmentation functions are measured for the first time in proton-proton collisions for charged pions, kaons, and protons within jets recoiling against a Z boson. The charged-hadron distributions are studied longitudinally and transversely to the jet direction for jets with transverse momentum 20 < p_{\textrm{T}} < 100 GeV and in the pseudorapidity range 2.5 < \eta < 4. The data sample was collected with the LHCb experiment at a center-of-mass energy of 13 TeV, corresponding to an integrated luminosity of 1.64 fb^{-1}. Triple differential distributions as a function of the hadron longitudinal momentum fraction, hadron transverse momentum, and jet transverse momentum are also measured for the first time. This helps constrain transverse-momentum-dependent fragmentation functions. Differences in the shapes and magnitudes of the measured distributions for the different hadron species provide insights into the hadronization process for jets predominantly initiated by light quarks. Comment: All figures and tables, along with machine-readable versions and any supplementary material and additional information, are available at https://cern.ch/lhcbproject/Publications/p/LHCb-PAPER-2022-013.html (LHCb public pages).
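
    For background (these conventional definitions are not quoted from the abstract), the longitudinal momentum fraction and the hadron momentum transverse to the jet axis used in such fragmentation measurements are typically defined as

    z \equiv \frac{\vec{p}_{\mathrm{jet}}\cdot\vec{p}_{h}}{|\vec{p}_{\mathrm{jet}}|^{2}}, \qquad j_{\mathrm{T}} \equiv \frac{|\vec{p}_{\mathrm{jet}}\times\vec{p}_{h}|}{|\vec{p}_{\mathrm{jet}}|},

    where \vec{p}_{h} and \vec{p}_{\mathrm{jet}} are the hadron and jet momenta.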

    Study of the B^{-} \to \Lambda_{c}^{+} \bar{\Lambda}_{c}^{-} K^{-} decay

    The decay B^{-} \to \Lambda_{c}^{+} \bar{\Lambda}_{c}^{-} K^{-} is studied in proton-proton collisions at a center-of-mass energy of \sqrt{s}=13 TeV using data corresponding to an integrated luminosity of 5\,\mathrm{fb}^{-1} collected by the LHCb experiment. In the \Lambda_{c}^{+} K^{-} system, the \Xi_{c}(2930)^{0} state observed at the BaBar and Belle experiments is resolved into two narrower states, \Xi_{c}(2923)^{0} and \Xi_{c}(2939)^{0}, whose masses and widths are measured to be m(\Xi_{c}(2923)^{0}) = 2924.5 \pm 0.4 \pm 1.1 \,\mathrm{MeV}, m(\Xi_{c}(2939)^{0}) = 2938.5 \pm 0.9 \pm 2.3 \,\mathrm{MeV}, \Gamma(\Xi_{c}(2923)^{0}) = 4.8 \pm 0.9 \pm 1.5 \,\mathrm{MeV}, \Gamma(\Xi_{c}(2939)^{0}) = 11.0 \pm 1.9 \pm 7.5 \,\mathrm{MeV}, where the first uncertainties are statistical and the second systematic. The results are consistent with a previous LHCb measurement using a prompt \Lambda_{c}^{+} K^{-} sample. Evidence of a new \Xi_{c}(2880)^{0} state is found with a local significance of 3.8\,\sigma, whose mass and width are measured to be 2881.8 \pm 3.1 \pm 8.5\,\mathrm{MeV} and 12.4 \pm 5.3 \pm 5.8 \,\mathrm{MeV}, respectively. In addition, evidence of a new decay mode \Xi_{c}(2790)^{0} \to \Lambda_{c}^{+} K^{-} is found with a significance of 3.7\,\sigma. The relative branching fraction of B^{-} \to \Lambda_{c}^{+} \bar{\Lambda}_{c}^{-} K^{-} with respect to the B^{-} \to D^{+} D^{-} K^{-} decay is measured to be 2.36 \pm 0.11 \pm 0.22 \pm 0.25, where the first uncertainty is statistical, the second systematic, and the third originates from the branching fractions of charm hadron decays. Comment: All figures and tables, along with any supplementary material and additional information, are available at https://cern.ch/lhcbproject/Publications/p/LHCb-PAPER-2022-028.html (LHCb public pages).
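
    If a single overall uncertainty is wanted for such results, a common convention (used here purely as an illustration; the abstract keeps the components separate) is to add the statistical and systematic uncertainties in quadrature, e.g. for the \Xi_{c}(2923)^{0} mass:

    \sigma_{\mathrm{tot}} = \sqrt{\sigma_{\mathrm{stat}}^{2}+\sigma_{\mathrm{syst}}^{2}} = \sqrt{0.4^{2}+1.1^{2}}\,\mathrm{MeV} \approx 1.2\,\mathrm{MeV},

    giving m(\Xi_{c}(2923)^{0}) \approx 2924.5 \pm 1.2\,\mathrm{MeV}.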

    Measurement of the ratios of branching fractions \mathcal{R}(D^{*}) and \mathcal{R}(D^{0})

    The ratios of branching fractions \mathcal{R}(D^{*})\equiv\mathcal{B}(\bar{B}\to D^{*}\tau^{-}\bar{\nu}_{\tau})/\mathcal{B}(\bar{B}\to D^{*}\mu^{-}\bar{\nu}_{\mu}) and \mathcal{R}(D^{0})\equiv\mathcal{B}(B^{-}\to D^{0}\tau^{-}\bar{\nu}_{\tau})/\mathcal{B}(B^{-}\to D^{0}\mu^{-}\bar{\nu}_{\mu}) are measured, assuming isospin symmetry, using a sample of proton-proton collision data corresponding to 3.0\,\mathrm{fb}^{-1} of integrated luminosity recorded by the LHCb experiment during 2011 and 2012. The tau lepton is identified in the decay mode \tau^{-}\to\mu^{-}\nu_{\tau}\bar{\nu}_{\mu}. The measured values are \mathcal{R}(D^{*})=0.281\pm0.018\pm0.024 and \mathcal{R}(D^{0})=0.441\pm0.060\pm0.066, where the first uncertainty is statistical and the second is systematic. The correlation between these measurements is \rho=-0.43. Results are consistent with the current average of these quantities and are at a combined 1.9 standard deviations from the predictions based on lepton flavor universality in the Standard Model. Comment: All figures and tables, along with any supplementary material and additional information, are available at https://cern.ch/lhcbproject/Publications/p/LHCb-PAPER-2022-039.html (LHCb public pages).
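
    A minimal sketch of how a combined deviation of two correlated measurements from their predictions can be computed is shown below; the Standard Model values used (roughly 0.254 for \mathcal{R}(D^{*}) and 0.299 for \mathcal{R}(D)), the quadrature treatment of uncertainties, and the significance convention are assumptions for illustration, not numbers or procedures taken from the abstract.

    # Illustrative combination of two correlated measurements against predictions via a
    # chi-square with the full covariance matrix. SM predictions, the quadrature sum of
    # uncertainties, and the significance convention are illustrative assumptions.
    import numpy as np
    from scipy.stats import chi2, norm

    measured   = np.array([0.281, 0.441])        # R(D*), R(D0) from the abstract
    sm_pred    = np.array([0.254, 0.299])        # assumed SM predictions (illustrative)
    stat, syst = np.array([0.018, 0.060]), np.array([0.024, 0.066])
    sigma      = np.hypot(stat, syst)            # total uncertainty, stat and syst in quadrature
    rho        = -0.43                           # correlation quoted in the abstract

    cov = np.array([[sigma[0]**2,               rho * sigma[0] * sigma[1]],
                    [rho * sigma[0] * sigma[1], sigma[1]**2]])

    delta   = measured - sm_pred
    chisq   = float(delta @ np.linalg.inv(cov) @ delta)
    p_value = chi2.sf(chisq, df=2)               # p-value for two degrees of freedom
    z_score = norm.isf(p_value)                  # one-sided Gaussian-equivalent significance

    print(f"chi2 = {chisq:.2f}, p = {p_value:.3f}, significance = {z_score:.1f} sigma")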

    Validation of Deep Learning techniques for quality augmentation in diffusion MRI for clinical studies

    The objective of this study is to evaluate the efficacy of deep learning (DL) techniques in improving the quality of diffusion MRI (dMRI) data in clinical applications. The study aims to determine whether the use of artificial intelligence (AI) methods in medical images may result in the loss of critical clinical information and/or the appearance of false information. To assess this, the focus was on the angular resolution of dMRI, and a clinical trial was conducted on migraine, specifically comparing episodic and chronic migraine patients. The number of gradient directions had an impact on white-matter analysis results, with statistically significant differences between groups being drastically reduced when using 21 gradient directions instead of the original 61. Fourteen teams from different institutions were tasked with using DL to enhance three diffusion metrics (FA, AD and MD) calculated from data acquired with 21 gradient directions and a b-value of 1000 s/mmÂČ. The goal was to produce results comparable to those calculated from 61 gradient directions. The results were evaluated using both standard image quality metrics and Tract-Based Spatial Statistics (TBSS) to compare episodic and chronic migraine patients. The study results suggest that while most DL techniques improved the ability to detect statistical differences between groups, they also led to an increase in false positives. The number of false positives grew at a roughly constant rate, linearly proportional to the number of new true positives, which highlights the risk of generalizing AI-based tasks when assessing diverse clinical cohorts and training on data from a single group. The methods also showed divergent performance when replicating the original distribution of the data, and some exhibited significant bias. In conclusion, extreme caution should be exercised when using AI methods for harmonization or synthesis on heterogeneous data in clinical studies, as important information may be altered even when global metrics such as structural similarity or peak signal-to-noise ratio appear to suggest otherwise.
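
    Since the abstract warns that global metrics such as structural similarity and peak signal-to-noise ratio can look reassuring even when clinically relevant information is altered, a minimal sketch of how such metrics are typically computed on a diffusion map is given below; the file names and the nibabel/scikit-image approach are assumptions for illustration, not the study's pipeline.

    # Minimal sketch: compare a DL-enhanced fractional anisotropy (FA) map against the
    # reference FA map from the fully sampled acquisition using SSIM and PSNR.
    # File names are hypothetical; nibabel and scikit-image are assumed to be installed.
    import nibabel as nib
    from skimage.metrics import structural_similarity, peak_signal_noise_ratio

    reference = nib.load("fa_61_directions.nii.gz").get_fdata()    # hypothetical reference FA map
    enhanced  = nib.load("fa_21_directions_dl.nii.gz").get_fdata() # hypothetical DL-enhanced FA map

    data_range = reference.max() - reference.min()
    ssim = structural_similarity(reference, enhanced, data_range=data_range)
    psnr = peak_signal_noise_ratio(reference, enhanced, data_range=data_range)

    # High SSIM/PSNR alone does not guarantee that group-level statistics (e.g. TBSS
    # contrasts between episodic and chronic migraine) are preserved.
    print(f"SSIM = {ssim:.3f}, PSNR = {psnr:.1f} dB")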
    • 
